mutual information

In probability theory and information theory, the mutual information (MI) or (formerly) transinformation of two random variables is a measure of the variables' mutual dependence. Unlike the correlation coefficient, MI is not limited to real-valued random variables; it is more general and determines how similar the joint distribution p(X,Y) is to the product of the marginal distributions p(X)p(Y). MI is the expected value of the pointwise mutual information (PMI). The most common unit of measurement of mutual information is the bit.
== Definition of mutual information ==
Formally, the mutual information of two discrete random variables ''X'' and ''Y'' can be defined as:
: I(X;Y) = \sum_{y \in Y} \sum_{x \in X} p(x,y) \log{ \left( \frac{p(x,y)}{p(x)\,p(y)} \right) }, \,\!

where ''p''(''x'',''y'') is the joint probability distribution function of ''X'' and ''Y'', and ''p''(''x'') and ''p''(''y'') are the marginal probability distribution functions of ''X'' and ''Y'' respectively.
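To make the discrete definition concrete, here is a short Python sketch (a hypothetical example, not part of the original article) that computes ''I''(''X''; ''Y'') directly from a small joint probability table, using base-2 logarithms so the result is in bits.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical joint distribution p(x, y) for two binary variables.
# Rows index values of X, columns index values of Y; entries sum to 1.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

p_x = p_xy.sum(axis=1)  # marginal p(x)
p_y = p_xy.sum(axis=0)  # marginal p(y)

# I(X;Y) = sum over x, y of p(x,y) * log2( p(x,y) / (p(x) p(y)) ),
# with zero-probability cells contributing 0 by convention.
mi = 0.0
for i in range(p_xy.shape[0]):
    for j in range(p_xy.shape[1]):
        if p_xy[i, j] > 0:
            mi += p_xy[i, j] * np.log2(p_xy[i, j] / (p_x[i] * p_y[j]))

print(mi)  # about 0.278 bits for this table
</syntaxhighlight>

Each term of the double sum is the pointwise mutual information of one (''x'', ''y'') pair weighted by its probability, which is why MI is the expected value of the PMI.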
In the case of continuous random variables, the summation is replaced by a definite double integral:
: I(X;Y) = \int_Y \int_X p(x,y) \log{ \left( \frac{p(x,y)}{p(x)\,p(y)} \right) } \; dx \,dy,

where ''p''(''x'',''y'') is now the joint probability ''density'' function of ''X'' and ''Y'', and ''p''(''x'') and ''p''(''y'') are the marginal probability density functions of ''X'' and ''Y'' respectively.
If the logarithm base 2 is used, the unit of mutual information is the bit.
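For the continuous case, the double integral can be evaluated numerically. The sketch below (an illustrative assumption, not from the article) estimates the mutual information of a standard bivariate Gaussian with correlation ρ = 0.6 and compares it with the known closed form −½ ln(1 − ρ²), working in nats since the natural logarithm is used.

<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal, norm

rho = 0.6  # assumed correlation of a standard bivariate Gaussian
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def integrand(y, x):
    # p(x,y) * ln( p(x,y) / (p(x) p(y)) ): the pointwise mutual information density
    pxy = joint.pdf([x, y])
    return pxy * np.log(pxy / (norm.pdf(x) * norm.pdf(y)))

# Definite double integral over a region wide enough to hold essentially all the mass.
mi_numeric, _ = integrate.dblquad(integrand, -8, 8, lambda x: -8, lambda x: 8)
mi_closed = -0.5 * np.log(1 - rho ** 2)  # closed form for Gaussians, in nats

print(mi_numeric, mi_closed)  # both about 0.223 nats
</syntaxhighlight>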
Intuitively, mutual information measures the information that ''X'' and ''Y'' share: it measures how much knowing one of these variables reduces uncertainty about the other. For example, if ''X'' and ''Y'' are independent, then knowing ''X'' does not give any information about ''Y'' and vice versa, so their mutual information is zero. At the other extreme, if ''X'' is a deterministic function of ''Y'' and ''Y'' is a deterministic function of ''X'' then all information conveyed by ''X'' is shared with ''Y'': knowing ''X'' determines the value of ''Y'' and vice versa. As a result, in this case the mutual information is the same as the uncertainty contained in ''Y'' (or ''X'') alone, namely the entropy of ''Y'' (or ''X''). Moreover, this mutual information is the same as the entropy of ''X'' and as the entropy of ''Y''. (A very special case of this is when ''X'' and ''Y'' are the same random variable.)
Mutual information is a measure of the inherent dependence expressed in the joint distribution of ''X'' and ''Y'' relative to the joint distribution of ''X'' and ''Y'' under the assumption of independence.
Mutual information therefore measures dependence in the following sense: ''I''(''X''; ''Y'') = 0 if and only if ''X'' and ''Y'' are independent random variables. This is easy to see in one direction: if ''X'' and ''Y'' are independent, then ''p''(''x'',''y'') = ''p''(''x'') ''p''(''y''), and therefore:
: \log{ \left( \frac{p(x)\,p(y)}{p(x)\,p(y)} \right) } = \log 1 = 0. \,\!

Moreover, mutual information is nonnegative (i.e. ''I''(''X'';''Y'') ≥ 0; see below) and symmetric (i.e. ''I''(''X'';''Y'') = ''I''(''Y'';''X'')).
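These two properties can also be checked numerically with the same joint-table computation used above (again a hypothetical sketch, not part of the article): a joint table built as the outer product ''p''(''x'')''p''(''y'') yields zero mutual information, and transposing the table, which swaps the roles of ''X'' and ''Y'', leaves the value unchanged.

<syntaxhighlight lang="python">
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits for a discrete joint probability table."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    # Where p(x,y) = 0 the term contributes 0, so replace the ratio by 1 there.
    ratio = np.divide(p_xy, p_x * p_y, out=np.ones_like(p_xy), where=p_xy > 0)
    return float(np.sum(p_xy * np.log2(ratio)))

p_x = np.array([0.3, 0.7])
p_y = np.array([0.2, 0.8])
independent = np.outer(p_x, p_y)          # p(x,y) = p(x) p(y)
dependent = np.array([[0.4, 0.1],
                      [0.1, 0.4]])

print(mutual_information(independent))    # about 0: independence gives zero MI
print(mutual_information(dependent))      # positive, and equal to...
print(mutual_information(dependent.T))    # ...the same value: I(X;Y) = I(Y;X)
</syntaxhighlight>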

Excerpt source: the free encyclopedia Wikipedia.
Read the full article on "mutual information" at Wikipedia.


